random ticket
- Research Report (0.97)
- Contests & Prizes (0.94)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- North America > Canada (0.04)
- (2 more...)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- North America > Canada (0.04)
- Asia > China > Guangdong Province (0.04)
- Asia > China > Beijing > Beijing (0.04)
Reviews: One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers
I think that the finding that LT can generalise (I use the word "can" because it does not seem that this is true consistently) is an interesting one, and with some changes, this paper would deserve publication at a top venue like NeurIPS. However, I think we still see things differently on two points. Firstly, I do not believe that comparison to existing algorithms is orthogonal to the topic of this paper. You claim that "... we may be able to generate new initialization schemes which can substantially improve training of neural networks from scratch" and I agree, but the point I am making is that there are other ways of obtaining a better initialisation (e.g., unsupervised pretraining and/or layer-wise pretraining) which are known to improve performance and speed up convergence, some of them using less computation than is required to generate a lottery ticket. I view your algorithm as yet another way of generating a good init from some data, one which yields good performance, potentially with other benefits like compression, after some amount of fine-tuning (the fact that an LT is trained from scratch and thus requires more fine-tuning than using trained weights seems like a drawback, not an advantage, from this viewpoint).
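For context on what "generating a lottery ticket" involves (and why it costs at least one full training run before any fine-tuning), here is a minimal sketch of the train-prune-rewind procedure the review is discussing. The model handling, the `train_fn` callback, and the magnitude-pruning details are illustrative assumptions, not the paper's actual code.

```python
# Minimal sketch of the lottery-ticket (LT) procedure under discussion:
# train the dense network, prune by weight magnitude, then rewind the
# surviving weights to their original random initialization. `train_fn`
# and the pruning details are assumptions for illustration only.
import torch

def lottery_ticket(model, train_fn, prune_fraction=0.8):
    # 1. Remember the original (random) initialization.
    init_state = {k: v.clone() for k, v in model.state_dict().items()}

    # 2. Train to convergence; this is the expensive step the review
    #    compares against cheaper initialisation schemes.
    train_fn(model)

    # 3. Per-layer magnitude pruning: keep the largest-magnitude
    #    (1 - prune_fraction) of the weights in each weight matrix.
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:                      # skip biases, norm parameters
            continue
        n_keep = max(1, int(p.numel() * (1 - prune_fraction)))
        threshold = p.abs().flatten().kthvalue(p.numel() - n_keep).values
        masks[name] = (p.abs() > threshold).float()

    # 4. Rewind to the initial weights and apply the mask: the masked,
    #    re-initialized network is the "winning ticket", which still has
    #    to be retrained from scratch.
    model.load_state_dict(init_state)
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])
    return model, masks
```

The reviewer's cost argument hinges on step 2: the mask only becomes available after a full training run, which is the computation being weighed against cheaper initialisation schemes such as layer-wise or unsupervised pretraining.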
Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot
Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Liwei Wang, Jason D. Lee
Network pruning is a method for reducing test-time computational resource requirements with minimal performance degradation. Conventional wisdom about pruning algorithms suggests that: (1) Pruning methods exploit information from training data to find good subnetworks; (2) The architecture of the pruned network is crucial for good performance. In this paper, we conduct sanity checks for the above beliefs on several recent unstructured pruning methods and surprisingly find that: (1) A set of methods which aims to find good subnetworks of the randomly-initialized network (which we call "initial tickets") hardly exploits any information from the training data; (2) For the pruned networks obtained by these methods, randomly changing the preserved weights in each layer, while keeping the total number of preserved weights unchanged per layer, does not affect the final performance. These findings inspire us to choose a series of simple data-independent prune ratios for each layer, and randomly prune each layer accordingly to get a subnetwork (which we call "random tickets"). Experimental results show that our zero-shot random tickets outperform or attain performance similar to existing "initial tickets". In addition, we identify one existing pruning method that passes our sanity checks. We hybridize the ratios in our random ticket with this method and propose a new method called "hybrid tickets", which achieves further improvement. (Our code is publicly available at https://github.com/JingtongSu/sanity-checking-pruning)
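To make the two findings concrete, here is a minimal, self-contained sketch of (a) the layerwise sanity check described in the abstract (reassigning which positions in a layer are preserved while keeping the per-layer count fixed) and (b) a "random ticket" (a data-independent keep ratio per layer, realized as a uniformly random mask). The layer shapes, the keep-ratio schedule, and the function names are illustrative assumptions; the authors' released implementation is at the repository linked above.

```python
# Sketch of the "rearrange" sanity check and of random-ticket construction.
# Shapes, ratios, and names are illustrative, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def shuffle_mask(mask):
    """Sanity check: keep the number of preserved weights in the layer,
    but reassign the preserved positions uniformly at random."""
    flat = mask.ravel()
    shuffled = np.zeros_like(flat)
    n_keep = int(flat.sum())
    shuffled[rng.choice(flat.size, size=n_keep, replace=False)] = 1.0
    return shuffled.reshape(mask.shape)

def random_ticket_masks(layer_shapes, keep_ratios):
    """Random ticket: a data-independent keep ratio per layer,
    realized as a uniformly random binary mask (no training data used)."""
    masks = []
    for shape, keep in zip(layer_shapes, keep_ratios):
        n = int(np.prod(shape))
        k = int(round(keep * n))
        flat = np.zeros(n, dtype=np.float32)
        flat[rng.choice(n, size=k, replace=False)] = 1.0
        masks.append(flat.reshape(shape))
    return masks

# Toy example: three fully-connected layers, sparser in the wide middle layer.
shapes = [(784, 512), (512, 512), (512, 10)]
ratios = [0.2, 0.1, 0.3]                      # fraction of weights to keep
masks = random_ticket_masks(shapes, ratios)
print([float(m.mean()) for m in masks])       # per-layer preserved fraction
```

Applying something like `shuffle_mask` to a mask produced by an "initial ticket" method and observing unchanged final accuracy is exactly the kind of evidence the paper's sanity check looks for.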
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > United States > California > Los Angeles County > Long Beach (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- (2 more...)